LLM-based Chatbot for Customer Service in Banking

Financial institutions are increasingly leveraging chatbots and similar solutions to enhance customer service, streamline operations, and improve the overall user experience.

These technologies facilitate the seamless integration of corporate data into AI assistants, enhancing the utility of large language models (LLMs) for applications such as handling customer inquiries, processing transactions, and automating repetitive tasks. While consumers are rapidly adopting LLM-powered assistants, banks are proceeding with caution, mindful of the technology's limitations and security risks.
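Integrating corporate data into an assistant typically follows a retrieval-augmented generation (RAG) pattern: relevant internal content is fetched first, then supplied to the LLM as grounding context. Below is a minimal sketch of that pattern; the toy knowledge base, keyword-overlap retrieval, and prompt wording are all illustrative assumptions, not any bank's actual system.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

# Toy knowledge base standing in for a bank's internal content.
KNOWLEDGE_BASE = [
    Document("Card replacement", "Lost cards can be blocked and replaced instantly in the mobile app."),
    Document("Deposit rates", "Fixed deposit rates are published daily on the public rates page."),
]

def retrieve(query: str, k: int = 1) -> list[Document]:
    """Naive keyword-overlap retrieval; real deployments would use embeddings."""
    words = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(words & set(d.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Ground the model in retrieved context and instruct it to stay there.
    context = "\n".join(f"- {d.title}: {d.text}" for d in retrieve(query))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say so and offer to connect a human agent.\n"
        f"Context:\n{context}\n\nCustomer question: {query}"
    )

print(build_prompt("How do I replace a lost card?"))
```

The prompt produced here would then be sent to whichever hosted model the institution uses; constraining the model to the retrieved context is also one of the standard defences against the hallucination risk discussed below.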

Customer Service Landscape in Singapore

According to a report by American software company ServiceNow, cited by CNA in November 2024, Singaporeans spent more than 30 million hours on hold with customer service last year.

While call volumes have declined over the years, most banks still receive large numbers of customer queries. DBS reported that its 500-strong customer service officer workforce handles over 250,000 customer queries each month, or more than 3 million a year. OCBC reported receiving 1.4 million calls in 2024.

LLM-based Chatbot

Let us take a look at how some financial institutions have been embracing generative AI technology:

  • DBS launched a generative AI virtual assistant for its customer service workforce in 2024.
  • JPMorgan Chase has rolled out its own version of OpenAI’s ChatGPT that can perform the work of a research analyst, according to the Financial Times.
  • OCBC is deploying OCBC GPT, a generative AI chatbot powered by Microsoft’s Azure OpenAI, to its 30,000 employees globally. The tool is designed to assist with writing, research, and ideation, boosting productivity and enhancing customer service.

As the technology evolves, we must be mindful of some of the limitations and security risks that it might pose:

Limitations:

  • Data Dependency: Generative models are only as good as the datasets they are trained on. Flawed, biased, or incomplete data can lead to inaccurate outputs and unreliable predictions.
  • Hallucinations: LLMs may sometimes generate results that seem plausible but are factually incorrect or nonsensical, a phenomenon known as “hallucinations”. This is problematic in scenarios where accuracy is paramount (a simple groundedness check is sketched after this list).
  • Explainability: The inner workings of complex AI systems can be difficult to interpret, raising concerns about transparency and accountability, particularly for professionals relying on generative AI for accounting or other critical financial tasks.
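One lightweight way to catch hallucinations before an answer reaches a customer is to check whether the reply is actually supported by the retrieved context. The sketch below uses a naive content-word overlap with an assumed threshold; it is illustrative only, and production systems would use much stronger verification.

```python
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "and", "in", "on", "your"}

def content_words(text: str) -> set[str]:
    # Crude tokenisation: lowercase, strip punctuation, drop stopwords.
    return {w.strip(".,!?").lower() for w in text.split()} - STOPWORDS

def looks_grounded(answer: str, context: str, threshold: float = 0.6) -> bool:
    """Return True if most content words in the answer appear in the context."""
    answer_words = content_words(answer)
    if not answer_words:
        return False
    overlap = len(answer_words & content_words(context))
    return overlap / len(answer_words) >= threshold

context = "Fixed deposit rates are published daily on the rates page."
print(looks_grounded("Rates are published daily on the rates page.", context))    # True
print(looks_grounded("Your fixed deposit earns a guaranteed 12% bonus.", context))  # False
```

An answer that fails the check can be suppressed, regenerated, or routed to a human agent instead of being shown to the customer.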

Security Risks:

  • Unauthorised Transactions: A chatbot that has not been properly restricted could be manipulated into conducting financial transactions or altering account details. Even where an AI chatbot is specifically designed to make transactions, it could mistakenly process a payment using inaccurate details (a simplified sketch of this failure mode follows this list).
  • Breach of Privacy: A chatbot with excessive autonomy could access and share sensitive data beyond its authorisation. This could result in personal information being leaked to unauthorised parties or sensitive data being stored in insecure locations.
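The unauthorised-transaction risk is easiest to see in code. In the hypothetical anti-pattern below, the model's raw text output is executed directly as a command, so a manipulated or hallucinated reply becomes a real transaction; the function names and reply format are invented for illustration.

```python
def execute_transfer(amount: float, payee: str) -> None:
    # Stand-in for a real core-banking call; illustrative only.
    print(f"Transferred ${amount:,.2f} to {payee}")

def unsafe_dispatch(model_reply: str) -> None:
    # Anti-pattern: the model's text is executed directly, with no
    # allow-list, no amount limit, and no user confirmation.
    if model_reply.startswith("TRANSFER"):
        _, amount, _, payee = model_reply.split(maxsplit=3)
        execute_transfer(float(amount), payee)

# A prompt-injected or hallucinated reply goes straight through:
unsafe_dispatch("TRANSFER 5000 to attacker-account")
```

The mitigations in the next section are aimed at breaking exactly this direct path from model output to irreversible action.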

Risk Mitigations

To prevent these issues, developers and cybersecurity professionals should take steps to ensure chatbots are restricted from operating outside their intended scope. The three measures below are brought together in a short code sketch after the list.

  • Define Goals Clearly: Chatbots should be programmed with well-defined goals and limitations. This ensures they stay focused on their intended tasks and avoid unauthorised actions.
  • Multi-Factor Authentication: For tasks with potential security risks, implementing multi-factor authentication helps to limit unauthorised access. This could involve requiring user confirmation or human oversight before the chatbot completes sensitive actions.
  • User Control Mechanisms: Provide clear and easy-to-use mechanisms for users to regain control of the chatbot. This could involve offering options to switch to a human agent, cancel ongoing actions, or report any suspicious behaviour.
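
The minimal sketch below combines all three mitigations: a tool allow-list (defined goals), an explicit approval step for sensitive actions (standing in for MFA or human oversight), and an escape hatch that lets the user cancel or reach a human agent. The tool names and confirm() flow are assumptions for illustration, not a prescribed design.

```python
ALLOWED_TOOLS = {"check_balance", "transfer_funds"}   # defined goals: everything else is refused
SENSITIVE_TOOLS = {"transfer_funds"}                  # actions requiring approval

def check_balance() -> str:
    return "Your available balance is $1,234.56"      # placeholder lookup

def transfer_funds(amount: float, payee: str) -> str:
    return f"Transferred ${amount:,.2f} to {payee}"   # placeholder core-banking call

TOOL_IMPLS = {"check_balance": check_balance, "transfer_funds": transfer_funds}

def confirm(action: str) -> bool:
    """Stand-in for MFA or a human-in-the-loop approval of a sensitive step."""
    return input(f"Approve '{action}'? (yes/no): ").strip().lower() == "yes"

def handle(user_message: str, tool: str, **kwargs) -> str:
    # User control: an escape hatch to a human agent at any point.
    if user_message.strip().lower() in {"agent", "cancel"}:
        return "Connecting you to a human agent..."
    # Defined goals: refuse anything outside the allow-list.
    if tool not in ALLOWED_TOOLS:
        return f"Sorry, I am not permitted to perform '{tool}'."
    # MFA / oversight: sensitive actions need explicit approval first.
    if tool in SENSITIVE_TOOLS and not confirm(f"{tool} {kwargs}"):
        return "Action cancelled. Nothing was executed."
    return TOOL_IMPLS[tool](**kwargs)

print(handle("check my balance", "check_balance"))
print(handle("close my account", "close_account"))  # refused: outside the allow-list
```

Unlike the unsafe dispatcher shown earlier, no model output here can trigger a sensitive action without passing the allow-list and the approval gate.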

In conclusion, integrating ChatGPT and similar AI solutions into financial services could enhance customer service and operational efficiency. However, despite the benefits and productivity gains, these technologies come with limitations and security risks.

Financial institutions must ensure high-quality training data, transparency, and robust security measures to prevent unauthorised transactions and data breaches. As banks continue to leverage generative AI to enhance work processes, they must remain vigilant and take proactive measures to mitigate risks while exploring AI's potential in banking.

References

  1. Sergiienko, B (2024). Why Generative AI in Banking Is A Secret Weapon: Your Blueprint for Implementation. https://masterofcode.com/blog/generative-ai-in-banking
  2. Brunner, U (2023). Beyond Queries: How ChatGPT in Banking Unleashes Innovation. https://fintechnews.sg/81390/studies/chat-gpt-in-banking-innovation/
  3. Lim, R-A (2024). DBS launches generative AI virtual assistant for customer service workforce (BT). https://www.businesstimes.com.sg/companies-markets/dbs-launches-generative-ai-virtual-assistant-customer-service-workforce
  4. Tan, F (2023). Banking’s balancing act with technology (The Edge). https://www.theedgesingapore.com/sff/singapore-fintech-festival-2023/bankings-balancing-act-technology


Author

Steven Ng
School of Information Technology
Nanyang Polytechnic
E-mail: [email protected]

Steven Ng is a Senior Lecturer at Nanyang Polytechnic’s School of Information Technology, with five years of experience teaching Machine Learning and Artificial Intelligence. Prior to joining NYP, he spent more than 15 years in the financial industry. In his last role, he supported front-office trading systems for fixed income, emerging markets, and counterparty risk trading.